

Search for: All records

Creators/Authors contains: "Kunda, Maithilee"


  1. Observations abound about the power of visual imagery in human intelligence, from how Nobel Prize-winning physicists make their discoveries to how children understand bedtime stories. These observations raise an important question for cognitive science: what computations take place in someone’s mind when they use visual imagery? Answering this question is not easy and will require much continued research across the multiple disciplines of cognitive science. Here, we focus on a related and more circumscribed question from the perspective of artificial intelligence (AI): If you have an intelligent agent that uses visual imagery-based knowledge representations and reasoning operations, then what kinds of problem solving might be possible, and how would such problem solving work? We highlight recent progress in AI toward answering these questions in the domain of visuospatial reasoning, looking at a case study of how imagery-based artificial agents can solve visuospatial intelligence tests. In particular, we first examine several variations of imagery-based knowledge representations and problem-solving strategies that are sufficient for solving problems from the Raven’s Progressive Matrices intelligence test. We then look at how artificial agents, instead of being designed manually by AI researchers, might learn portions of their own knowledge and reasoning procedures from experience, including learning visuospatial domain knowledge, learning and generalizing problem-solving strategies, and learning the actual definition of the task in the first place.

     
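The abstract above describes imagery-based strategies for Raven's Progressive Matrices only at a high level. As a rough illustration of how such a strategy can operate purely on images, the sketch below infers a pixel-level transformation from one matrix cell to the next and then picks the answer most similar to the imagined missing cell. This is a minimal, illustrative sketch, assuming square binary (black-and-white) numpy arrays and a small hand-picked transform set; it is not the specific agents described in the paper.

```python
import numpy as np

# Candidate pixel-level transforms an imagery-based agent might consider.
TRANSFORMS = {
    "identity":        lambda img: img,
    "rotate_90":       lambda img: np.rot90(img, 1),
    "rotate_180":      lambda img: np.rot90(img, 2),
    "flip_horizontal": lambda img: np.fliplr(img),
    "flip_vertical":   lambda img: np.flipud(img),
}

def similarity(a, b):
    """Jaccard similarity over the 'ink' pixels of two binary images."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return 1.0 if union == 0 else np.logical_and(a, b).sum() / union

def solve_2x2(a, b, c, answer_choices):
    """Complete the analogy A : B :: C : ? using only image operations.

    a, b, c and each answer choice are square binary numpy arrays.
    """
    # 1. Infer the transform that best explains how A relates to B.
    name, transform = max(TRANSFORMS.items(),
                          key=lambda item: similarity(item[1](a), b))
    # 2. Apply the same transform to C to "imagine" the missing cell.
    predicted = transform(c)
    # 3. Choose the answer most similar to the imagined image.
    best = max(range(len(answer_choices)),
               key=lambda i: similarity(predicted, answer_choices[i]))
    return best, name
```

The same imagine-and-compare loop extends naturally to 3x3 matrices by scoring candidate transforms across both rows and columns before predicting the missing cell.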
  2. The block design test (BDT), in which a person has to recreate a visual design using colored blocks, is notable among cognitive assessments because it makes so much of a person's problem-solving strategy "visible" through their ongoing manual actions. While, for decades, numerous pockets of research on the BDT have identified certain behavioral variables as being important for certain cognitive or neurological hypotheses, there is no unifying framework for bringing together this spread of variables and hypotheses. In this paper, we identify 25 independent and dependent variables that have been examined as part of published BDT studies across many areas of cognitive science and present a sample of the research on each one. We also suggest variables of interest for future BDT research, especially as made possible with the advent of advanced recording technologies like wearable eye trackers.
  3. Many cognitive assessments are limited by their reliance on relatively sparse measures of performance, like per-item accuracy or reaction time. Capturing more detailed behavioral measurements from cognitive assessments will enhance their utility in many settings, from individual clinical evaluations to large-scale research studies. We demonstrate the feasibility of combining scene and gaze cameras with supervised learning algorithms to automatically measure key behaviors on the block design test, a widely used test of visuospatial cognitive ability. We also discuss how this block-design measurement system could enhance the assessment of many critical cognitive and meta-cognitive functions such as attention, planning, progress monitoring, and strategy selection.
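As a rough sketch of how scene and gaze cameras might be combined with supervised learning for this kind of measurement, the snippet below trains an off-the-shelf classifier to label where the test-taker is looking during each fixation. The feature set, label scheme, and file names are hypothetical placeholders for illustration, not details taken from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-fixation features extracted from the scene + gaze cameras:
# [gaze_x, gaze_y, fixation_duration_ms, head_pitch, head_yaw]
X = np.load("fixation_features.npy")   # shape: (n_fixations, n_features)
# Hypothetical labels: 0 = printed design, 1 = block workspace, 2 = construction area
y = np.load("fixation_labels.npy")

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("Cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

clf.fit(X, y)
# Per-fixation predictions can then be aggregated into behavioral measures
# such as glance counts or total look duration on each region of interest.
```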
  4. Nonverbal task learning is defined here as a variant of interactive task learning in which an agent learns the definition of a new task without any verbal information such as task instructions. Instead, the agent must 1) learn the task definition using only a single solved example problem as its training input, and then 2) generalize this definition in order to successfully parse new problems. In this paper, we present a conceptual framework for nonverbal task learning, and we compare and contrast this type of learning with existing learning paradigms in AI. We also discuss nonverbal task learning in the context of nonverbal human intelligence tests, which are standardized tests designed to be given without any verbal instructions so that they can be used by people with language difficulties.
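One simple way to make the two steps in this abstract concrete (learn a task definition from a single solved example, then generalize it to new problems) is a generate-and-test scheme over a library of candidate task rules. The toy rule library and numeric problem encoding below are assumptions chosen purely for illustration, not the paper's framework.

```python
# Toy "task definitions": each rule maps a problem's given items and answer
# choices to the index of the answer that rule would select.

def rule_pick_largest(items, choices):
    return max(range(len(choices)), key=lambda i: choices[i])

def rule_pick_smallest(items, choices):
    return min(range(len(choices)), key=lambda i: choices[i])

def rule_pick_sum(items, choices):
    target = sum(items)
    return min(range(len(choices)), key=lambda i: abs(choices[i] - target))

RULE_LIBRARY = [rule_pick_largest, rule_pick_smallest, rule_pick_sum]

def learn_from_example(items, choices, correct_index):
    """Step 1: keep only the rules consistent with the single solved example."""
    return [r for r in RULE_LIBRARY if r(items, choices) == correct_index]

def solve(rules, items, choices):
    """Step 2: apply the surviving rules to a new, unsolved problem."""
    votes = [r(items, choices) for r in rules]
    return max(set(votes), key=votes.count)  # majority vote among rules

# Solved example: givens (2, 3), choices (1, 4, 5), correct answer index 2.
surviving = learn_from_example((2, 3), (1, 4, 5), 2)
print(solve(surviving, (4, 4), (3, 8, 2)))  # both surviving rules pick index 1
```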
  5. Understanding how a person thinks, i.e., measuring a single individual’s cognitive characteristics, is challenging because cognition is not directly observable. Practically speaking, standardized cognitive tests (tests of IQ, memory, attention, etc.), with results interpreted by expert clinicians, represent the state of the art in measuring a person’s cognition. Three areas of AI show particular promise for improving the effectiveness of this kind of cognitive testing: 1) behavioral sensing, to more robustly quantify individual test-taker behaviors; 2) data mining, to identify and extract meaningful patterns from behavioral datasets; and 3) cognitive modeling, to help map observed behaviors onto hypothesized cognitive strategies. We bring these three areas of AI research together in a unified conceptual framework and provide a sampling of recent work in each area. Continued research at the nexus of AI and cognitive testing has potentially far-reaching implications for society in virtually every context in which measuring cognition is important, including research across many disciplines of cognitive science as well as applications in clinical, educational, and workforce settings.
  6. Do people have dispositions towards visual or verbal thinking styles, i.e., a tendency towards one default representational modality versus the other? The problem in trying to answer this question is that visual/verbal thinking styles are challenging to measure. Subjective, introspective measures are the most common but often show poor reliability and validity; neuroimaging studies can provide objective evidence but are intrusive and resource-intensive. In previous work, we observed that in order for a purely behavioral testing method to be able to objectively evaluate a person’s visual/verbal thinking style, 1) the task must be solvable equally well using either visual or verbal mental representations, and 2) it must offer a secondary behavioral marker, in addition to primary performance measures, that indicates which modality is being used. We collected four such tasks from the psychology literature and conducted a small pilot study with adult participants to see the extent to which visual/verbal thinking styles can be differentiated using an individual’s results on these tasks. 
  7. The Leiter International Performance Scale-Revised (Leiter-R) is a standardized cognitive test that seeks to "provide a nonverbal measure of general intelligence by sampling a wide variety of functions from memory to nonverbal reasoning." Understanding the computational building blocks of nonverbal cognition, as measured by the Leiter-R, is an important step towards understanding human nonverbal cognition, especially with respect to typical and atypical trajectories of child development. One subtest of the Leiter-R, Form Completion, involves synthesizing and localizing a visual figure from its constituent slices. Form Completion poses an interesting nonverbal problem that seems to combine several aspects of visual memory, mental rotation, and visual search. We describe a new computational cognitive model that addresses Form Completion using a novel, mental-rotation-friendly image representation that we call the Polar Augmented Resolution (PolAR) Picture, which enables high-fidelity mental rotation operations. We present preliminary results using actual Leiter-R test items and discuss directions for future work. 
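The abstract above does not spell out how the PolAR Picture is constructed, but the general idea it builds on can be sketched: resampling an image onto a polar (angle, radius) grid turns rotation about the center into a simple shift along the angle axis. The code below is a plain nearest-neighbor polar resampling, assumed here for illustration; it omits whatever augmented-resolution scheme the paper's representation actually uses.

```python
import numpy as np

def to_polar(img, n_radii=64, n_angles=360):
    """Nearest-neighbor resampling of a square image onto an (angle, radius) grid."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radii = np.linspace(0, min(cy, cx), n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    polar = np.zeros((n_angles, n_radii), dtype=img.dtype)
    for i, theta in enumerate(angles):
        ys = np.clip(np.round(cy + radii * np.sin(theta)).astype(int), 0, h - 1)
        xs = np.clip(np.round(cx + radii * np.cos(theta)).astype(int), 0, w - 1)
        polar[i] = img[ys, xs]
    return polar

def rotate_polar(polar, degrees):
    """Rotation about the image center is just a roll along the angle axis."""
    n_angles = polar.shape[0]
    shift = int(round(degrees / 360.0 * n_angles))
    return np.roll(polar, shift, axis=0)
```

Comparing rotate_polar(to_polar(piece), angle) against to_polar(target) over a range of angles then gives an inexpensive way to test whether a candidate slice matches a target region under rotation, which is the kind of operation a Form Completion item requires.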